Feedforward neural network : Wikipedia English edition
Feedforward neural network

A feedforward neural network is an artificial neural network where connections between the units do ''not'' form a cycle. This is different from recurrent neural networks.
The feedforward neural network was the first and simplest type of artificial neural network devised. In this network, the information moves in only one direction, forward, from the input nodes, through the hidden nodes (if any) and to the output nodes. There are no cycles or loops in the network.
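The one-directional flow described above can be sketched as a plain forward pass. In this sketch the layer sizes, weight values, and the tanh activation are illustrative assumptions, not something specified by the article:

```python
import math

def forward(x, layers):
    """Propagate input x through a feedforward network.

    layers is a list of (weights, biases) pairs; weights is a list of
    rows, one row per output unit of that layer.  Information moves
    strictly forward: each layer's output feeds only the next layer.
    """
    a = x
    for weights, biases in layers:
        a = [math.tanh(sum(w * v for w, v in zip(row, a)) + b)
             for row, b in zip(weights, biases)]
    return a

# A tiny 2-2-1 network with arbitrary, hand-picked weights.
layers = [
    ([[0.5, -0.2], [0.3, 0.8]], [0.1, -0.1]),  # hidden layer
    ([[1.0, -1.0]], [0.0]),                    # output layer
]
print(forward([1.0, 0.0], layers))
```

Because there are no cycles, a single left-to-right pass over the layers suffices; a recurrent network would instead need to carry state across time steps.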
==Single-layer perceptron==
The simplest kind of neural network is a single-layer perceptron network, which consists of a single layer of output nodes; the inputs are fed directly to the outputs via a series of weights. Units with this kind of threshold activation function are also called ''artificial neurons'' or ''linear threshold units''. In the literature the term ''perceptron'' often refers to networks consisting of just one of these units. A similar neuron was described by Warren McCulloch and Walter Pitts in the 1940s.
A perceptron can be created using any values for the activated and deactivated states, as long as the threshold value lies between the two. Most perceptrons have outputs of 1 or -1 with a threshold of 0, and there is some evidence that such networks can be trained more quickly than networks built from nodes with other activation and deactivation values.
Perceptrons can be trained by a simple learning algorithm that is usually called the ''delta rule''. It calculates the errors between calculated output and sample output data, and uses this to create an adjustment to the weights, thus implementing a form of gradient descent.
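The delta rule described above can be sketched as follows. The learning rate, epoch count, and training data (logical AND, which is linearly separable) are illustrative assumptions:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Delta-rule training of a single threshold unit.

    samples: list of (inputs, target) pairs with targets in {-1, 1}.
    The weight update is proportional to the error between the target
    and the unit's output -- a simple form of gradient descent.
    """
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, t in samples:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
            err = t - y                                   # sample output minus calculated output
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn logical AND, encoded with -1/1 targets.
data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
w, b = train_perceptron(data)
```

Since AND is linearly separable, the perceptron convergence theorem guarantees this procedure finds a separating weight vector in finitely many updates.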
Single-unit perceptrons are only capable of learning linearly separable patterns; in 1969, in a famous monograph entitled ''Perceptrons'', Marvin Minsky and Seymour Papert showed that it was impossible for a single-layer perceptron network to learn an XOR function. It is often believed that they also conjectured (incorrectly) that a similar result would hold for a multi-layer perceptron network. However, this is not true, as both Minsky and Papert already knew that multi-layer perceptrons were capable of producing an XOR function. (See the page on Perceptrons for more information.)
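To make the Minsky-Papert point concrete, here is a hand-wired two-layer threshold network computing XOR. The weights follow a standard OR/NAND/AND construction and are chosen by hand rather than learned:

```python
def step(x):
    """Threshold unit: fires (1) when the weighted sum reaches 0."""
    return 1 if x >= 0 else 0

def xor_net(a, b):
    """Two hidden threshold units (OR and NAND) feeding an AND output.
    No single threshold unit can compute XOR, but this two-layer
    network can."""
    h1 = step(a + b - 0.5)         # OR
    h2 = step(-a - b + 1.5)        # NAND
    return step(h1 + h2 - 1.5)     # AND of the two hidden units

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# prints [0, 1, 1, 0]
```

The hidden layer maps the four inputs into a representation that ''is'' linearly separable, which is exactly what the single-layer network lacks.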
Although a single threshold unit is quite limited in its computational power, it has been shown that networks of parallel threshold units can approximate any continuous function from a compact interval of the real numbers into the interval [-1,1]. This result can be found in Peter Auer, Harald Burgsteiner and Wolfgang Maass, "A learning rule for very simple universal approximators consisting of a single layer of perceptrons".
A multi-layer neural network can compute a continuous output instead of a step function. A common choice is the so-called logistic function:
: y = \frac{1}{1+e^{-x}}
This function is also preferred because its derivative is easily calculated: y' = y(1 - y) (times \frac{df}{dx}, in general form, according to the chain rule).
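The convenient derivative property of the logistic function can be checked numerically. The evaluation point and the finite-difference step size below are arbitrary choices for illustration:

```python
import math

def logistic(x):
    """The logistic (sigmoid) function y = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

x = 0.7
y = logistic(x)
h = 1e-6

# Central-difference estimate of the derivative at x.
numeric = (logistic(x + h) - logistic(x - h)) / (2 * h)

# The closed form y' = y(1 - y) claimed by the article.
analytic = y * (1 - y)

print(abs(numeric - analytic) < 1e-6)
```

This easily computed derivative is what makes the logistic function attractive for backpropagation: the gradient at a unit can be formed from the unit's own output alone.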

Excerpt source: Wikipedia, the free encyclopedia
Read the full "Feedforward neural network" article on Wikipedia




Copyright(C) kotoba.ne.jp 1997-2016. All Rights Reserved.